17 - Factoring finite-state machines
- from Part IV - Synchronous sequential logic
- William J. Dally, Stanford University, California, R. Curtis Harting, Tor M. Aamodt, University of British Columbia, Vancouver
- Book: Digital Design Using VHDL
- Published online: 01 February 2019
- Print publication: 17 December 2015, pp 375-397
Summary
Factoring a state machine is the process of splitting the machine into two or more simpler machines. Factoring can greatly simplify the design of a state machine by separating orthogonal aspects of the machine into separate FSMs where they can be handled independently. The separate FSMs communicate via logic signals. One FSM provides input control signals to another FSM and senses its output status signals. Such factoring, if done properly, makes the machine simpler and also makes it easier to understand and maintain – by separating issues.
In a factored FSM, the state of each sub-machine represents one dimension of a multidimensional state space. Collectively the states of all of the sub-machines define the state of the overall machine – a single point in this state space. The combined machine has a number of states that is equal to the product of the number of states of the individual sub-machines – the number of points in the state space. With individual sub-machines having a few tens of states, it is not unusual for the overall machine to have thousands to millions of states. It would be impractical to handle such a large number of states without factoring.
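The multiplicative growth of the combined state space is easy to see with a small sketch (Python used for illustration; the sub-machine names and state counts are invented):

```python
# Hypothetical factored machine: three independent sub-FSMs with the
# state counts below. The equivalent flat machine has a state for
# every point in the product state space.
sub_fsm_states = {"datapath_ctrl": 12, "timer": 16, "mode": 5}

flat_states = 1
for n in sub_fsm_states.values():
    flat_states *= n

print(flat_states)  # 12 * 16 * 5 = 960 states in the flat machine
```

Three modest sub-machines already yield nearly a thousand flat states; a few more dimensions put the flat machine out of reach of manual design.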
We have already seen one form of factoring in Section 16.3 where we developed a state machine with a datapath component and a control component. In effect, we factored the total state of the machine into a datapath portion and a control portion. Here we generalize this concept by showing how the control portion itself can be factored.
In this chapter, we illustrate factoring by working two examples. In the first example, we start with a flat FSM and factor it into multiple simpler FSMs. In the second example we derive a factored FSM directly from the specification, without bothering with the flat FSM. Most real FSMs are designed using the latter method. A factoring is usually a natural outgrowth of the specification of a machine. It is rarely applied to an already flat machine.
10 - Arithmetic circuits
- from Part III - Arithmetic circuits
- pp 221-249
21 - System-level design
- from Part VI - System design
- pp 467-478
Summary
After reading to this point in the book, you now have the skills to design complex combinational and sequential logic modules. However, if someone were to ask you to design a DVD player, a computer system, or an Internet router, you would realize that each of these is not a single finite-state machine (or even a single datapath with associated finite-state controller). Rather, a typical system is a collection of modules, each of which may include several datapaths and finite-state controllers. These systems must first be decomposed into simple modules before the design and analysis skills you have learned in the previous chapters can be applied. The remaining problem, however, is how to partition the system to the level at which the design becomes manageable. This system-level design is one of the most interesting and challenging aspects of digital systems.
SYSTEM DESIGN PROCESS
The design of a system involves the following steps.
Specification The most important step in designing any system is deciding – and clearly specifying in writing – what you are going to build. We discuss specifications in more detail in Section 21.2.
Partitioning Once the system has been specified, the main task in system design is dividing the system into manageable subsystems or modules. This is a process of divide and conquer. The overall system is divided into subsystems that can then be designed (conquered) separately. At each stage, the subsystems should be specified to the same level of detail as the overall system was during our first step. As described in Section 21.3, we can partition a system by state, task, or interface.
Interface specification It is particularly important that the interfaces between subsystems be described in detail. With good interface specifications, individual modules can be developed and verified independently. When possible, interfaces should be independent of module internals – allowing modules to be modified without affecting the interface, or the design of neighboring modules.
Timing design Early in the design of a system, it is important to describe the timing and sequencing of operations. In particular, as work flows between modules, the sequencing of which module does a particular task on a particular cycle must be worked out to ensure that the right data come together at the correct place and time. This timing design also drives the performance tuning step described below.
Preface
- pp xv-xix
Summary
This book is intended to teach an undergraduate student to understand and design digital systems. It teaches the skills needed for current industrial digital system design using a hardware description language (VHDL) and modern CAD tools. Particular attention is paid to system-level issues, including factoring and partitioning digital systems, interface design, and interface timing. Topics needed for a deep understanding of digital circuits, such as timing analysis, metastability, and synchronization, are also covered. Of course, we cover the manual design of combinational and sequential logic circuits. However, we do not dwell on these topics because there is far more to digital system design than designing such simple modules.
Upon completion of a course using this book, students should be prepared to practice digital design in industry. They will lack experience, but they will have all of the tools they need for contemporary practice of this noble art. The experience will come with time.
This book has grown out of more than 25 years of teaching digital design to undergraduates (CS181 at Caltech, 6.004 at MIT, EE121 and EE108A at Stanford). It is also motivated by 35 years of experience designing digital systems in industry (Bell Labs, Digital Equipment, Cray, Avici, Velio Communications, Stream Processors, and NVIDIA). It combines these two experiences to teach what students need to know to function in industry in a manner that has been proven to work on generations of students. The VHDL guide in Appendix B is informed by nearly a decade of teaching VHDL to undergraduates at UBC (EECE 353 and EECE 259).
We wrote this book because we were unable to find a book that covered the system-level aspects of digital design. The vast majority of textbooks on this topic teach the manual design of combinational and sequential logic circuits and stop. While most texts today use a hardware description language, the vast majority teach a TTL-esque design style that, while appropriate in the era of 7400 quad NAND gate parts (the 1970s), does not prepare a student to work on the design of a three-billion-transistor GPU. Today's students need to understand how to factor a state machine, partition a design, and construct an interface with correct timing. We cover these topics in a simple way that conveys insight without getting bogged down in details.
25 - Memory systems
- from Part VI - System design
- pp 532-548
Summary
Memory is widely used in digital systems for many different purposes. In a processor, DDR SDRAM chips are used for main memory and SRAM arrays are used to implement caches, translation lookaside buffers, branch prediction tables, and other internal storage. In an Internet router (Figure 23.3(b)), memory is used for packet buffers, for routing tables, to hold per-flow data, and to collect statistics. In a cellphone SoC, memory is used to buffer video and audio streams.
A memory is characterized by three key parameters: its capacity, its latency, and its throughput. Capacity is the amount of data stored, latency is the amount of time taken to access data, and throughput is the number of accesses that can be done in a fixed amount of time.
A memory in a system, e.g., the packet buffer in a router, is often composed of multiple memory primitives: on-chip SRAM arrays or external DRAM chips. The number of primitives needed to realize a memory is governed by its capacity and its throughput. If one primitive does not have sufficient capacity to realize the memory, multiple primitives must be used – with just one primitive accessed at a time. Similarly, if one primitive does not have sufficient bandwidth to provide the required throughput, multiple primitives must be used in parallel – via duplication or interleaving.
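The sizing rule above can be sketched as a short calculation (Python for illustration; the capacity and throughput numbers are invented, not taken from the book):

```python
import math

def primitives_needed(cap_bits, accesses_per_s,
                      prim_cap_bits, prim_accesses_per_s):
    """Primitives required to meet both the capacity and the
    throughput of the desired memory."""
    for_capacity = math.ceil(cap_bits / prim_cap_bits)
    for_throughput = math.ceil(accesses_per_s / prim_accesses_per_s)
    # Capacity forces banking (one primitive accessed at a time);
    # throughput forces parallelism (duplication or interleaving).
    return max(for_capacity, for_throughput)

# e.g. a 16 Gb packet buffer needing 2e9 accesses/s, built from 4 Gb
# DRAM chips that each sustain 5e8 accesses/s (hypothetical figures):
print(primitives_needed(16e9, 2e9, 4e9, 5e8))  # 4
```

Here capacity and throughput happen to demand the same count; in general the larger of the two requirements wins.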
MEMORY PRIMITIVES
The vast majority of all memories in digital systems are implemented from two basic primitives: on-chip SRAM arrays and external DDR SDRAM chips. Here we will consider these memory primitives as black boxes, discussing their properties and how to interface to them. It is beyond the scope of this book to look inside the box and study their implementation.
SRAM arrays
On-chip SRAM arrays are useful for building small, fast, dedicated memories integrated near the logic that produces and consumes the data they store. While the total capacity of the SRAM that can be realized on one chip (about 400 Mb) is small compared with a single 4 Gb DRAM chip, these arrays can be accessed in a single clock cycle, compared with 25 cycles or more for a DRAM access.
Appendix A - VHDL coding style
- from Part VIII - Appendix: VHDL coding style and syntax guide
- pp 611-621
Summary
In this appendix we present a few guidelines for VHDL coding style. These guidelines grew out of experience with both VHDL and Verilog description languages. These guidelines developed over many years of teaching students digital design and managing design projects in industry, where the guidelines have been proven to reduce effort, to produce better final designs, and to produce designs that are more readable and maintainable. The many examples of VHDL throughout this book serve as examples of this style. The style presented here is intended for synthesizable designs – VHDL design entities that ultimately map to real hardware. A very different style is used in testbenches. This section is not intended to be a reference manual for VHDL. The following appendix provides a brief summary of the VHDL syntax used in this book and many other references are available online. Rather, this section gives a set of principles and style rules that help designers write correct, maintainable code. A reference manual explains what legal VHDL code is. This appendix explains what good VHDL code is. We give examples of good code and bad code, all of which are legal.
BASIC PRINCIPLES
We start with a few basic principles on which our VHDL style is based. The style presented is essentially a VHDL-2008 equivalent to a set of Verilog style guidelines based upon our experience over many years of teaching students digital design and managing design projects in industry combined with nearly a decade of experience teaching earlier versions of VHDL.
Know where your state is Every bit of state in your design should be explicitly declared. In our style, all state is in explicit flip-flop or register components, and all other portions are purely combinational. This approach avoids a host of problems that arise when writing sequential statements directly within an "if rising_edge(clk) then" statement inside a process. It also makes it much easier to detect inferred latches that occur when not all signals are assigned in all branches of a conditional statement.
Understand what your design entities will synthesize to When you write a design entity, you should have a good idea what logic will be generated. If your design entity is described structurally, wiring together other components, the outcome is very predictable. Small behavioral design entities and arithmetic blocks are also very predictable.
20 - Verification and test
- from Part V - Practical design
- pp 453-464
Summary
Verification and test are engineering processes that complement design. Verification is the task of ensuring that a design meets its specification. On a typical digital systems project, more effort is expended on verification than on the design itself. Because of the high cost and long delays involved in fabricating a chip, thorough verification is essential to ensure that the chip works the first time. A design error that is not caught during verification would result in costly delays and retooling.
Testing is performed to ensure that a particular instantiation of a design functions properly. When a chip is fabricated, some transistors, wires, or contacts may be faulty. A manufacturing test is performed to detect these faults so the device can be repaired or discarded.
DESIGN VERIFICATION
Simulation is the primary tool used to verify that a design meets its specification. The design is simulated using a number of tests that provide stimulus to the unit being tested and check that the design produces correct outputs. The VHDL testbenches we have seen throughout this book are examples of such tests.
Verification coverage
The verification challenge amounts to ensuring that the set of test patterns, the test suite, written to verify a design is complete. We measure the degree of completion of a test suite by its coverage of the specification and of the implementation. We typically insist on 100% coverage of both specification features and implementation lines or edges to consider the design verified.
The specification coverage of a set of tests is measured by determining the fraction of features in the specification that are exercised and checked by the tests. For example, suppose you have developed a digital clock chip that includes a day/date and an alarm function. Table 20.1 gives a partial list of features to be tested. Even for something as simple as a digital clock, the list of features can easily run into the hundreds. For a complex chip it is not unusual to have 10^5 or more features. Each test verifies one or more features. As tests are written, the features covered by each test are checked off.
14 - Sequential logic
- from Part IV - Synchronous sequential logic
- pp 305-327
Summary
The output of sequential logic depends not only on its input, but also on its state, which may reflect the history of the input. We form a sequential logic circuit via feedback – feeding state variables computed by a block of combinational logic back to its input. General sequential logic, with asynchronous feedback, can become complex to design and analyze due to multiple state bits changing at different times. We simplify our design and analysis tasks in this chapter by restricting ourselves to synchronous sequential logic, in which the state variables are held in a register and updated on each rising edge of a clock signal (clk).
The behavior of a synchronous sequential logic circuit, or finite-state machine (FSM), is completely described by two logic functions: one that computes its next state as a function of its input and present state, and one that computes its output – also as a function of its input and present state. We describe these two functions by means of a state table, or graphically with a state diagram. If states are specified symbolically, a state assignment maps the symbolic states onto a set of bit vectors – both binary and one-hot state assignments are commonly used.
Given a state table (or state diagram) and a state assignment, the task of implementing a finite-state machine is a simple one of synthesizing the next-state and output logic functions. For a one-hot state encoding, the synthesis is particularly simple because each state maps to a separate flip-flop and all edges in the state diagram leading to a state map into a logic function on the input of that flip-flop. For binary encodings, Karnaugh maps for each bit of the state vector are written and reduced to logic equations.
Finite-state machines can be implemented in VHDL by creating a state register to hold the current state, and describing the next-state and output functions with combinational logic descriptions, such as case statements as described in Chapter 7. State assignments should be specified using constants to allow them to be changed without altering the machine description itself. Special attention should be given to resetting the FSM to a known state at startup.
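The structure described above (a state register plus a table-driven next-state/output function with symbolic state constants) can be modeled in a few lines, here in Python rather than VHDL; the three-state machine and its input sequence are invented for illustration:

```python
# Symbolic state constants: the encoding can change without touching
# the machine description itself.
IDLE, RUN, DONE = "IDLE", "RUN", "DONE"

# State table: (present state, input) -> (next state, output)
TABLE = {
    (IDLE, 0): (IDLE, 0),
    (IDLE, 1): (RUN,  0),
    (RUN,  0): (RUN,  0),
    (RUN,  1): (DONE, 1),
    (DONE, 0): (IDLE, 0),
    (DONE, 1): (IDLE, 0),
}

# Reset to a known state, then "clock" the machine through an input
# sequence, updating the state register once per step.
state, outputs = IDLE, []
for inp in [1, 0, 1, 0]:
    state, out = TABLE[(state, inp)]
    outputs.append(out)

print(state, outputs)  # IDLE [0, 0, 1, 0]
```

In VHDL the table would become a case statement over the present state, and the explicit reset corresponds to initializing the state register at startup.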
References
- pp 653-657
29 - Synchronizer design
- from Part VII - Asynchronous logic
- pp 592-608
Summary
In a synchronous system, we can avoid putting our flip-flops in illegal or metastable states by always obeying the setup- and hold-time constraints. When sampling asynchronous signals or crossing between different clock domains, however, we cannot guarantee that these constraints will be met. In these cases, we design a synchronizer that, through a combination of waiting for metastable states to decay and isolation, reduces the probability of synchronization failure.
A brute-force synchronizer consisting of two back-to-back flip-flops is commonly used to synchronize single-bit signals. The first flip-flop samples the asynchronous signal and the second flip-flop isolates the possibly bad output of the first flip-flop until any illegal states are likely to have decayed. Such a brute-force synchronizer cannot be used on multi-bit signals unless they are encoded with a Gray code. If multiple bits are in transition when sampled by the synchronizer, they are independently resolved, possibly resulting in incorrect codes, with some bits sampled before the transition and some after the transition. We can safely synchronize multi-bit signals with a FIFO (first-in first-out) synchronizer. A FIFO serves both to synchronize the signals and to provide flow control, ensuring that each datum produced by a transmitter in one clock domain is sampled exactly once by a receiver in another clock domain – even when the clocks have different frequencies.
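The Gray-code property that makes multi-bit synchronization safe is that successive codes differ in exactly one bit, so at most one bit is in transition when the synchronizer samples. A quick sketch (Python, illustrative only):

```python
def to_gray(n: int) -> int:
    """Standard binary-to-Gray conversion: n XOR (n >> 1)."""
    return n ^ (n >> 1)

codes = [to_gray(n) for n in range(8)]
print([format(c, "03b") for c in codes])
# ['000', '001', '011', '010', '110', '111', '101', '100']

# Every adjacent pair of codes differs in exactly one bit position,
# so a sampled value is always either the old code or the new code.
diffs = [bin(codes[i] ^ codes[i + 1]).count("1") for i in range(7)]
print(diffs)  # [1, 1, 1, 1, 1, 1, 1]
```

A binary counter, by contrast, can have many bits in flight at once (e.g. 0111 to 1000), which is why FIFO read/write pointers crossing clock domains are conventionally Gray coded.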
WHERE ARE SYNCHRONIZERS USED?
Synchronizers are used in two distinct applications, as shown in Figure 29.1. First, when signals are coming from a truly asynchronous source, they must be synchronized before being input to a synchronous digital system. For example, a push-button switch pressed by a human produces an asynchronous signal. This signal can transition at any time, and so must be synchronized before it can be input to a synchronous circuit. Numerous physical detectors also generate truly asynchronous inputs. Photodetectors, temperature sensors, pressure sensors, etc. all produce outputs with transitions that are gated by physical processes, not a clock.
11 - Fixed- and floating-point numbers
- from Part III - Arithmetic circuits
- pp 250-268
Summary
In Chapter 10 we introduced the basics of computer arithmetic: adding, subtracting, multiplying, and dividing binary integers. In this chapter we continue our exploration of computer arithmetic by looking at number representation in more detail. Often integers do not suffice for our needs. For example, suppose we wish to represent a pressure that varies between 0 (vacuum) and 0.9 atmospheres with an error of at most 0.001 atmospheres. Integers don't help us much when we need to distinguish 0.899 from 0.9. For this task we will introduce the notion of a binary point (similar to a decimal point) and use fixed-point binary numbers.
In some cases, we need to represent data with a very large dynamic range. For example, suppose we need to represent time intervals ranging from 1 ps (10^-12 s) to one century (about 3 × 10^9 s) with an accuracy of 1%. To span this range with a fixed-point number would require 72 bits. However, if we use a floating-point number – in which we allow the position of the binary point to vary – we can get by with 13 bits: six bits to represent the number and seven bits to encode the position of the binary point.
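The bit counts quoted above can be checked directly (a sketch of the arithmetic, not the book's derivation):

```python
import math

# Fixed point: spanning 3e9 s at 1 ps resolution means distinguishing
# 3e9 / 1e-12 = 3e21 values, which needs ceil(log2(3e21)) bits.
fixed_bits = math.ceil(math.log2(3e9 / 1e-12))
print(fixed_bits)  # 72

# Floating point: a six-bit significand gives a relative step of
# 2**-6, comfortably close to the required 1% accuracy, and a
# seven-bit exponent is enough to select among the ~72 possible
# binary-point positions.
print(2 ** -6)     # 0.015625
```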
REPRESENTATION ERROR: ACCURACY, PRECISION, AND RESOLUTION
With digital electronics, we represent a number, x, as a string of bits, b. Many different number systems are used in digital systems. A number system can be thought of as two functions R and V. The representation function R maps a number x from some set of numbers (e.g., real numbers, integers, etc.) into a bit string b: b = R(x). The value function V returns the number (from the same set) represented by a particular bit string: y = V(b).
Consider mapping to and from the set of real numbers in some range. Because there are more possible real numbers than there are bit strings of a given length, many real numbers necessarily map to the same bit string. Thus, if we map a real number to a bit string with R and then back with V we will almost always get a slightly different real number than we started with. That is, if we compute y = V(R(x)) then y and x will differ.
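A concrete round trip makes the point. Below is a minimal sketch (Python, invented 4-bit format) of R and V for an unsigned fixed-point representation of [0, 1):

```python
BITS = 4  # hypothetical 4-bit format: codes 0..15 represent k/16

def R(x: float) -> int:      # representation function: real -> code
    return round(x * 2**BITS) % 2**BITS

def V(b: int) -> float:      # value function: code -> real
    return b / 2**BITS

x = 0.30
y = V(R(x))                  # nearest representable value
print(y)                     # 0.3125, not 0.30
```

The error |y - x| is bounded by half an LSB (here 1/32), which is the resolution of the format; x is recovered exactly only when it happens to be one of the sixteen representable values.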
7 - VHDL descriptions of combinational logic
- from Part II - Combinational logic
- pp 129-156
Summary
In Chapter 6 we saw how to synthesize combinational logic circuits manually from a specification. In this chapter we show how to describe combinational circuits in the VHDL hardware description language, building on our discussion of Boolean expressions in VHDL (Section 3.6) and the initial discussion of VHDL (Section 1.5). Once the function has been described in VHDL, it can be automatically synthesized, eliminating the need for manual synthesis.
Because all optimization is done by the synthesizer, the main goal in writing synthesizable VHDL is to make it easily readable and maintainable. For this reason, descriptions that are close to the function of a design (e.g., a truth table specified with a case statement) are preferable to those that are close to the implementation (e.g., equations using a concurrent assignment statement, or a structural description using gates). Descriptions that specify just the function tend to be easier to read and maintain than those that reflect a manual implementation of the function.
To verify that a VHDL design entity is correct, we write a testbench. A testbench is a piece of VHDL code that is used during simulation to instantiate the design entity to be tested, generate input stimulus, and check the design entity's outputs. While design entities must be coded in a strict synthesizable subset of VHDL, testbenches, which are not synthesized, can use the full VHDL language, including looping constructs. In a typical modern digital design project, at least as much effort goes into design verification (writing testbenches) as goes into doing the design itself.
THE PRIME NUMBER CIRCUIT IN VHDL
In describing combinational logic using VHDL we restrict our use of the language to constructs that can easily be synthesized into logic circuits.
Specifically, we restrict combinational circuits to be described using only concurrent signal assignment statements, case statements, if statements, or by the structural composition of other combinational design entities.
In this section we look at four ways of implementing the prime number (plus 1) circuit we introduced in Chapter 6 as combinational VHDL.
24 - Interconnect
- from Part VI - System design
- pp 521-531
Summary
The interconnect between modules is as important a component of most systems as the modules being connected. As described in Section 5.6, wires account for a large fraction of the delay and power in a typical system. A wire of just 3 μm in length has the same capacitance (and hence dissipates the same power) as a minimum-sized inverter. A wire of about 100 μm in length dissipates about the same power as one bit of a fast adder.
Whereas simple systems are connected with direct point-to-point connections between modules, larger and more complex systems are better organized with a bus or a network. Consider an analogy to a telephone or intercom system. If you need to talk to only two or three people, you might use a direct line to each person you need to talk to. However, if you need to talk to hundreds of people, you would use a switching system, allowing you to dial any of your correspondents over a shared interconnect.
ABSTRACT INTERCONNECT
Figure 24.1 shows a high-level view of a system using a general interconnect (e.g., a bus or a network). A number of clients are connected to the network by a pair of links to and from the interconnect. The links may be serialized (Section 22.3), and flow control is required on at least the link into the interconnect – to back-pressure the client in the event of contention.
To communicate, client S (the source client), transmits a packet over the link iS into the interconnect. The packet includes, at minimum, a destination address, D, and a payload, P, which may be of arbitrary (or even variable) length. The interconnect, possibly with some delay due to contention, delivers P to client D over link oD out of the interconnect. The payload P may contain a request type (e.g., read or write), a local address within D, and data or other arguments for a remote operation. Because the interconnect is addressed, any client A can communicate with any client B while requiring only a single pair of unidirectional links on each client module.
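The packet format and delivery behavior described above can be modeled abstractly (a Python sketch; the client names and payload fields are invented, and contention delay is ignored):

```python
from collections import defaultdict

class Interconnect:
    """Abstract addressed interconnect: delivers each payload to the
    output link of the client named in the packet's destination."""
    def __init__(self):
        self.out_links = defaultdict(list)   # one output queue per client

    def send(self, dest, payload):
        # A real bus or network may delay delivery under contention;
        # this model delivers immediately.
        self.out_links[dest].append(payload)

net = Interconnect()
# Client "S" sends two packets to client "D" over its one input link:
net.send("D", {"type": "write", "addr": 0x40, "data": 7})
net.send("D", {"type": "read",  "addr": 0x44})
print(len(net.out_links["D"]))  # 2
```

Note that adding a new client requires only its own pair of links; no per-pair wiring is needed, which is the advantage over point-to-point connection.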
13 - Arithmetic examples
- from Part III - Arithmetic circuits
- pp 290-302
12 - Fast arithmetic circuits
- from Part III - Arithmetic circuits
- pp 269-289
Summary
In this chapter, we look at three methods for improving the speed of arithmetic circuits, and in particular multipliers. We start in Section 12.1 by revisiting binary adders and see how to reduce their delay from O(n) to O(log(n)) by using hierarchical carry-look-ahead circuits. This technique can be applied directly to build fast adders and is also used to accelerate the summation of partial products in multipliers. In Section 12.2 we see how the number of partial products that need to be summed in a multiplier can be greatly reduced by recoding one of the inputs as a sequence of higher-radix, signed digits. Finally, in Section 12.3 we see how the partial products can be accumulated with O(log(n)) delay by using a tree of full adders. The combination of these three techniques into a fast multiplier is left as Exercises 12.17 to 12.20.
CARRY LOOK-AHEAD
Recall that the adder developed in Section 10.2 is called a ripple-carry adder because a transition on the carry signal must ripple from bit to bit to affect the final value of the MSB of the sum. This ripple-carry results in an adder delay that increases linearly with the number of bits in the adder. For large adders, this linear delay becomes prohibitive.
We can build an adder with a delay that increases logarithmically, rather than linearly, with the width of the adder by using a dual-tree structure as shown in Figure 12.1. This circuit works by computing carry propagate and carry generate across groups of bits in the upper tree and then using these signals to generate the carry signal into each bit in the lower tree. The propagate signal p_ij is true if a carry into bit i will propagate from bit i to bit j and generate a carry out of bit j. The generate signal g_ij is true if a carry will be generated out of bit j regardless of the carry into bit i.
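The node function of the look-ahead tree is the rule for merging the (P, G) pair of a low group with that of an adjacent high group. A minimal sketch (Python, illustrative):

```python
# Per-bit propagate/generate, and the combining rule used at each
# tree node: for a low half and a high half of a group,
#   P = P_hi and P_lo,   G = G_hi or (P_hi and G_lo)
def pg_bit(a: int, b: int):
    return a ^ b, a & b              # (propagate, generate) for one bit

def pg_combine(lo, hi):
    p_lo, g_lo = lo
    p_hi, g_hi = hi
    return p_hi & p_lo, g_hi | (p_hi & g_lo)

# Two-bit group where bit 0 generates and bit 1 propagates: the group
# as a whole generates a carry out regardless of the carry in.
lo = pg_bit(1, 1)                    # a0=1, b0=1 -> (p, g) = (0, 1)
hi = pg_bit(0, 1)                    # a1=0, b1=1 -> (p, g) = (1, 0)
print(pg_combine(lo, hi))            # (0, 1)
```

Because pg_combine is associative, the group signals can be built in a balanced tree, giving the O(log n) depth claimed above.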
Part III - Arithmetic circuits
- pp 219-220
27 - Flip-flops
- from Part VII - Asynchronous logic
- pp 566-579
Summary
Flip-flops are among the most critical circuits in a modern digital system. As we have seen in previous chapters, flip-flops are central to all synchronous sequential logic. Registers (built from flip-flops) hold the state (both control and data state) of all of our finite-state machines. In addition to this central role in logic design, flip-flops also consume a large fraction of the die area, power, and cycle time of a typical digital system.
Until now, we have considered a flip-flop as a black box. In this chapter, we study the internal workings of the flip-flop. We derive the logic design of a typical D flip-flop and show how the timing properties introduced in Chapter 15 follow from this design.
We first develop the flip-flop design informally – following an intuitive argument. We start by developing the latch. The implementation of a latch follows directly from its specification. From the implementation we can then derive the setup, hold, and delay times of the latch. We then see how to build a flip-flop by combining two latches in a master–slave arrangement. The timing properties of the flip-flop can then be derived from its implementation.
Following this informal development, we then derive the design of a latch and flip-flop using flow-table synthesis. This serves both to reinforce the properties of these storage elements and to give a good example of flow-table synthesis. We introduce the concept of state equivalence during this derivation. This formal derivation can be skipped by a casual reader.
INSIDE A LATCH
A schematic symbol for a latch is shown in Figure 27.1(a), and waveforms illustrating its behavior and timing are shown in Figure 27.1(b). A latch has two inputs, data d and enable g, and one output, q. When the enable input is high, the output follows the input. When the enable input is low, the output holds its current state.
As shown in Figure 27.1(b), a latch, like a flip-flop, has a setup time t_s and a hold time t_h. An input must be set up t_s before the enable falls and held for t_h after the enable has fallen for the input value to be correctly stored. Latch delay is characterized by two delays: t_dGQ, from the enable rising to the output changing, and t_dDQ, from the data input changing to the output changing.
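The latch behavior just described can be sketched behaviorally in VHDL. This is an illustrative model (the entity name and ports are assumptions, not the book's code); a flip-flop would then combine two such latches with complementary enables in the master–slave arrangement mentioned above:

```vhdl
-- Behavioral sketch of a level-sensitive latch: when enable g is high
-- the output follows data input d; when g is low the output holds.
library ieee;
use ieee.std_logic_1164.all;

entity latch is
  port(
    d, g : in  std_logic;  -- data and enable inputs
    q    : out std_logic   -- output
  );
end latch;

architecture behavioral of latch is
begin
  process(d, g)
  begin
    if g = '1' then
      q <= d;   -- transparent: output follows input
    end if;     -- g = '0': no assignment, so q holds its state
  end process;
end behavioral;
```

Note that the missing else branch is what makes this a storage element: when g is low, no assignment occurs and q retains its previous value.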
Part IV - Synchronous sequential logic
- Print publication: 17 December 2015, pp 303-304
Acknowledgments
6 - Combinational logic design
- from Part II - Combinational logic
- Print publication: 17 December 2015, pp 105-128
Summary
Combinational logic circuits implement logical functions on a set of inputs. Used for control, arithmetic, and data steering, combinational circuits are the heart of digital systems. Sequential logic circuits (see Chapter 14) use combinational circuits to generate their next state functions.
In this chapter we introduce combinational logic circuits and describe a procedure to design these circuits given a specification. Before the mid-1980s, such manual synthesis of combinational circuits was a major part of digital design practice. Today, however, designers write the specification of logic circuits in a hardware description language (such as VHDL), and synthesis is performed automatically by a computer-aided design (CAD) program.
We describe the manual synthesis process here because every digital designer should understand how to generate a logic circuit from a specification. Understanding this process allows the designer to better use the CAD tools that perform this function in practice, and, on rare occasions, to generate critical pieces of logic manually.
COMBINATIONAL LOGIC
As illustrated in Figure 6.1, a combinational logic circuit generates a set of outputs whose state depends only on the current state of the inputs. Of course, when an input changes state, some time is required for an output to reflect this change. Except for this delay, however, the outputs do not reflect the history of the circuit. With a combinational circuit, a given input state will always produce the same output state regardless of the sequence of previous input states. A circuit whose output depends on previous input states is called a sequential circuit (see Chapter 14).
For example, a majority circuit, a logic circuit that accepts n inputs and outputs a 1 if at least ⌊n/2+1⌋ of the inputs are 1, is a combinational circuit. The output depends only on the number of 1s in the present input state. Previous input states do not affect the output.
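For n = 3, the majority function outputs a 1 when at least two inputs are 1, which reduces to a simple sum-of-products form. A minimal sketch (entity and port names are illustrative assumptions, not from the text):

```vhdl
-- Combinational 3-input majority circuit: y is 1 when at least two
-- of the inputs a, b, c are 1.
library ieee;
use ieee.std_logic_1164.all;

entity majority3 is
  port(
    a, b, c : in  std_logic;
    y       : out std_logic
  );
end majority3;

architecture dataflow of majority3 is
begin
  -- Purely combinational: y depends only on the current input state,
  -- never on the sequence of previous inputs.
  y <= (a and b) or (a and c) or (b and c);
end dataflow;
```

Because the architecture contains only a concurrent signal assignment with no stored state, the same input state always produces the same output state.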
On the other hand, a circuit that outputs a 1 if the number of 1s in the n inputs is greater than the number of 1s in the previous input state is sequential, not combinational. A given input state, e.g., i_k = 011, can result in o = 1 if the previous input was i_{k-1} = 010, or in o = 0 if the previous input was i_{k-1} = 111.